This technical report describes a modular and extensible architecture for computing vision statistics in RoboCup SPL (Mario), presented during the SPL Open Research Challenge at RoboCup 2022, held in Bangkok, Thailand. Mario is an open-source, ready-to-use software application whose ultimate goal is to contribute to the growth of the RoboCup SPL community. Mario comes with a GUI that integrates multiple machine learning and computer vision functionalities, including automatic camera calibration, background subtraction, homography computation, player and ball tracking and localization, NAO robot pose estimation, and fall detection. Mario ranked no. 1 in the Open Research Challenge.
Population-level analysis is proposed to address data sparsity when building predictive models for engineering infrastructure. Using an interpretable hierarchical Bayesian approach and operational fleet data, domain expertise is naturally encoded (and appropriately shared) across different subgroups representing (i) use type, (ii) component, or (iii) operating condition. Specifically, domain expertise is exploited to constrain the model via assumptions (and prior distributions) that allow the methodology to automatically share information between similar assets, improving the survival analysis of a truck fleet and power prediction in a wind farm. In each asset-management example, a set of correlated functions is learned in a combined inference to build a population model. Parameter estimation improves when subfleets are allowed to share correlated information at different levels of the hierarchy; in turn, groups with incomplete data automatically borrow statistical strength from those that are data-rich. The statistical correlations enable knowledge transfer via Bayesian transfer learning, and the correlations can be inspected to inform which assets share information about which effects (i.e., parameters). The success of both case studies demonstrates broad applicability to practical infrastructure monitoring, since the approach naturally adapts interpretable fleet models to differing in-situ examples.
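The "borrowing of statistical strength" in such a hierarchy can be illustrated with a minimal sketch of partial pooling in a normal-normal model. The function name and the variance parameters `tau2` (between-group) and `sigma2` (within-group) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def partial_pooling(groups, tau2=1.0, sigma2=1.0):
    """Shrink per-group means toward the population mean.

    Each group's estimate is a precision-weighted blend of its own
    sample mean and the pooled mean (standard normal-normal hierarchy):
    data-sparse groups are pulled strongly toward the population,
    data-rich groups barely move.
    """
    pooled = np.mean([x for g in groups for x in g])
    estimates = []
    for g in groups:
        n = len(g)
        w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # weight on the group's own data
        estimates.append(w * np.mean(g) + (1 - w) * pooled)
    return estimates
```

With these settings, a single-observation group is pulled halfway toward the pooled mean while a four-observation group moves only slightly, which is the shrinkage behaviour the abstract describes.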
Reservoir computing is a recurrent neural network paradigm in which only the output layer is trained. Recently, it was demonstrated that adding time-shifts to the signals generated by a reservoir can provide large improvements in performance accuracy. In this work, we present a technique to choose the optimal time-shifts. Our technique maximizes the rank of the reservoir matrix using a rank-revealing QR algorithm and is not task-dependent. Further, our technique does not require a model of the system, and is therefore directly applicable to analog hardware reservoir computers. We demonstrate our time-shift optimization technique on two types of reservoir computer: one based on an opto-electronic oscillator and one based on the traditional recurrent network with a $\tanh$ activation function. We find that our technique provides improved accuracy over random time-shift selection in essentially all cases.
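As a rough sketch of the idea (not the authors' implementation), one can greedily add candidate time-shifts as long as they increase the numerical rank of the concatenated reservoir state matrix. Here numpy's SVD-based `matrix_rank` stands in for the rank-revealing QR used in the paper, and all names are illustrative:

```python
import numpy as np

def choose_time_shifts(states, candidate_shifts, k):
    """Greedily pick up to k time shifts that increase the rank of the
    concatenated (shifted) reservoir state matrix.

    states: (T, N) matrix of reservoir node signals over time.
    Each accepted shift appends a time-shifted copy of the states,
    enlarging the space of readout features.
    """
    chosen, current = [], states
    for _ in range(k):
        best_shift, best_rank = None, np.linalg.matrix_rank(current)
        for s in candidate_shifts:
            if s in chosen:
                continue
            cand = np.hstack([current, np.roll(states, s, axis=0)])
            r = np.linalg.matrix_rank(cand)
            if r > best_rank:
                best_rank, best_shift = r, s
        if best_shift is None:  # no candidate improves the rank
            break
        chosen.append(best_shift)
        current = np.hstack([current, np.roll(states, best_shift, axis=0)])
    return chosen
```

For example, a reservoir whose nodes all emit the same sinusoid has a rank-1 state matrix, and a single time-shift already raises the rank, so the greedy search accepts it.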
Cardiac auscultation is one of the most cost-effective techniques for detecting and identifying many heart conditions, and computer-aided decision systems based on auscultation can support physicians in their decisions. Unfortunately, their application in clinical trials is still minimal, since most of them only aim to detect the presence of extra or abnormal waves in the phonocardiogram signal, i.e., they only provide a binary ground-truth variable (normal vs. abnormal). This is mainly due to the lack of large public datasets in which more detailed descriptions of such abnormal waves (e.g., heart murmurs) are available. To pave the way for more effective research on auscultation-based medical recommendation systems, our team has prepared the currently largest pediatric heart-sound dataset. A total of 5282 recordings were collected from the four main auscultation locations of 1568 patients, in the process of which 215780 heart sounds were manually annotated. Furthermore, and for the first time, each cardiac murmur was manually annotated by an expert annotator according to its timing, shape, pitch, grading, and quality. In addition, the auscultation locations where the murmur is present were identified, as well as the location where the murmur is detected most intensively. Such a detailed description of a relatively large number of heart sounds may pave the way for new machine learning algorithms with real-world applications for the detection and analysis of murmurs for diagnostic purposes.
Observations with precise radial velocity (RV) instruments are currently limited by spurious RV signals introduced by stellar activity. We show that machine learning techniques such as linear regression and neural networks can effectively remove activity signals (due to starspots/faculae) from RV observations. Previous work focused on carefully filtering out activity signals using modeling techniques such as Gaussian process regression (e.g., Haywood et al. 2014). Instead, we systematically remove activity signals using only changes to the average shape of the spectral lines, with no information about when the observations were collected. We trained machine learning models on both simulated data (generated with the SOAP 2.0 software; Dumusque et al. 2014) and observations of the Sun from the HARPS-N solar telescope (Dumusque et al. 2015; Phillips et al. 2016; Collier Cameron et al. 2019). We find that these techniques can predict and remove stellar activity both from simulated data (improving the RV scatter from 82 cm/s to 3 cm/s) and from more than 600 real observations taken nearly daily with the HARPS-N solar telescope (improving the RV scatter from 1.753 m/s to 1.039 m/s, an improvement of about 1.7x). In the future, these or similar techniques could remove activity signals from observations of stars beyond our solar system and ultimately help detect habitable-zone Earth-mass exoplanets around Sun-like stars.
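A minimal sketch of the linear-regression variant, assuming the line-shape changes arrive as a feature matrix (e.g. bisector span and FWHM changes; the exact inputs and names here are illustrative, not the paper's):

```python
import numpy as np

def remove_activity(rv, shape_features):
    """Predict the activity-induced RV component from spectral
    line-shape features via ordinary least squares and subtract it.

    rv: (T,) observed radial velocities.
    shape_features: (T, F) changes in the mean line shape per epoch.
    Returns (cleaned_rv, predicted_activity).
    """
    # Design matrix with an intercept column
    X = np.column_stack([np.ones(len(rv)), shape_features])
    coef, *_ = np.linalg.lstsq(X, rv, rcond=None)
    activity = X @ coef
    return rv - activity, activity
```

If the activity signal really is a (near-)linear function of the line-shape changes, the residual scatter drops sharply, which is the mechanism behind the quoted 82 cm/s to 3 cm/s improvement on simulated data.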
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical imaging analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%), and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
One of the major challenges in Deep Reinforcement Learning for control is the need for extensive training to learn the policy. Motivated by this, we present the design of the Control-Tutored Deep Q-Networks (CT-DQN) algorithm, a Deep Reinforcement Learning algorithm that leverages a control tutor, i.e., an exogenous control law, to reduce learning time. The tutor can be designed using an approximate model of the system, without any assumption about knowledge of the system's dynamics, and it is not expected to achieve the control objective if used stand-alone. During learning, the tutor occasionally suggests an action, thus partially guiding exploration. We validate our approach on three scenarios from OpenAI Gym: the inverted pendulum, lunar lander, and car racing. We demonstrate that CT-DQN achieves better or equivalent data efficiency compared to classic function-approximation solutions.
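The tutor's role in exploration can be sketched as an action-selection rule in which the exogenous control law is occasionally followed instead of the learned policy. The probabilities and names below are illustrative, not the paper's exact scheme:

```python
import random

def select_action(q_values, tutor_action, eps=0.1, tutor_prob=0.3, rng=random):
    """Tutor-guided action selection (CT-DQN-style sketch).

    With probability tutor_prob the exogenous control law's suggestion
    is followed, partially guiding exploration; with probability eps a
    uniformly random action is taken; otherwise the greedy action under
    the learned Q-values is chosen.
    """
    r = rng.random()
    if r < tutor_prob:
        return tutor_action                      # follow the control tutor
    if r < tutor_prob + eps:
        return rng.randrange(len(q_values))      # standard random exploration
    return max(range(len(q_values)), key=lambda a: q_values[a])  # greedy
```

Everything else in the agent (replay buffer, target network, Q-learning update) stays as in standard DQN; only the behaviour policy is biased toward the tutor early in training.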
To make machine learning (ML) sustainable and apt to run on the diverse devices where the relevant data is, it is essential to compress ML models as needed, while still meeting the required learning quality and time performance. However, how much and when an ML model should be compressed, and {\em where} its training should be executed, are hard decisions to make, as they depend on the model itself, the resources of the available nodes, and the data such nodes own. Existing studies focus on each of those aspects individually; however, they do not account for how such decisions can be made jointly and adapted to one another. In this work, we model the network system focusing on the training of DNNs, formalize the above multi-dimensional problem, and, given its NP-hardness, formulate an approximate dynamic programming problem that we solve through the PACT algorithmic framework. Importantly, PACT leverages a time-expanded graph representing the learning process, and a data-driven and theoretical approach for the prediction of the loss evolution to be expected as a consequence of training decisions. We prove that PACT's solutions can get as close to the optimum as desired, at the cost of an increased time complexity, and that, in any case, such complexity is polynomial. Numerical results also show that, even under the most disadvantageous settings, PACT outperforms state-of-the-art alternatives and closely matches the optimal energy cost.
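The time-expanded-graph idea can be illustrated with a toy dynamic program in which states are (epoch, configuration) pairs and edge costs model the energy of one training step under a configuration. This is a heavily simplified, hypothetical sketch; PACT additionally predicts the loss evolution and optimizes compression and placement jointly:

```python
def plan_training(costs, T):
    """Min-cost path through a time-expanded graph of training decisions.

    costs[c1][c2]: energy of running one epoch in configuration c2
    after being in configuration c1 (off-diagonal entries can encode
    switching overhead). T: number of epochs. Returns (total_cost, plan),
    where plan lists the configuration at each of the T+1 stages.
    """
    n = len(costs)
    best = [0.0] * n          # best[c]: min cost to reach (t, c)
    back = []                 # backpointers, one list per epoch
    for _ in range(T):
        new, arg = [], []
        for c2 in range(n):
            c1 = min(range(n), key=lambda c: best[c] + costs[c][c2])
            arg.append(c1)
            new.append(best[c1] + costs[c1][c2])
        best = new
        back.append(arg)
    # Reconstruct the cheapest plan from the backpointers
    c = min(range(n), key=lambda c2: best[c2])
    total, plan = best[c], [c]
    for arg in reversed(back):
        c = arg[c]
        plan.append(c)
    plan.reverse()
    return total, plan
```

With expensive configuration switches, the DP naturally settles on a single cheap configuration for the whole horizon; with time-varying costs it would schedule switches, which is the kind of joint decision the abstract describes.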
Prescriptive Process Monitoring systems recommend, during the execution of a business process, interventions that, if followed, prevent a negative outcome of the process. Such interventions have to be reliable, that is, they have to guarantee the achievement of the desired outcome or performance, and they have to be flexible, that is, they have to avoid overturning the normal process execution or forcing the execution of a given activity. Most of the existing Prescriptive Process Monitoring solutions, however, while performing well in terms of recommendation reliability, provide the users with very specific (sequences of) activities that have to be executed, without caring about the feasibility of these recommendations. To face this issue, we propose a new Outcome-Oriented Prescriptive Process Monitoring system that recommends temporal relations between activities that have to be guaranteed during the process execution in order to achieve a desired outcome. This softens the mandatory execution of an activity at a given point in time, thus leaving the user more freedom in deciding the interventions to put in place. Our approach defines these temporal relations with Linear Temporal Logic over finite traces (LTLf) patterns that are used as features to describe the historical process data recorded in an event log by the information systems supporting the execution of the process. The encoded log is used to train a Machine Learning classifier to learn a mapping between the temporal patterns and the outcome of a process execution. The classifier is then queried at runtime to return as recommendations the most salient temporal patterns to be satisfied in order to maximize the likelihood of a certain outcome for an ongoing process execution given as input. The proposed system is assessed using a pool of 22 real-life event logs that have already been used as a benchmark in the Process Mining community.
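To make the encoding concrete, here is a minimal sketch of one such LTLf pattern, the "response" pattern G(a -> F b), used as a Boolean feature over traces. The function names are illustrative, and the actual system uses a richer catalogue of patterns whose features are fed to an ML classifier:

```python
def response_holds(trace, a, b):
    """LTLf 'response' pattern G(a -> F b): every occurrence of
    activity a is eventually followed by activity b in the trace."""
    pending = False
    for act in trace:
        if act == a:
            pending = True   # an occurrence of a awaits a later b
        if act == b:
            pending = False  # the pending obligation is discharged
    return not pending

def encode_log(log, patterns):
    """One Boolean feature per (a, b) response pattern, per trace.
    The resulting matrix is what a classifier would be trained on."""
    return [[response_holds(t, a, b) for (a, b) in patterns] for t in log]
```

At runtime, the patterns whose satisfaction the classifier associates most strongly with the positive outcome are returned as recommendations, rather than a fixed next activity.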
A social robot is an autonomous robot that interacts with people by engaging in the socio-emotional behaviors, skills, capabilities, and rules attached to its collaborative role. To achieve these goals, we believe that modeling the interaction with the user and adapting the robot's behavior to the user are essential to its social role. This paper presents our first attempt to integrate user-modeling features into a social and affective robot. We propose a cloud-based architecture for modeling user-robot interaction, so that the approach can be reused with different types of social robots.